Future State 2025 by Hunter Muller

Author: Hunter Muller
Language: eng
Format: epub
ISBN: 9781119574811
Publisher: Wiley
Published: 2020-06-30T00:00:00+00:00


How Artificial Intelligence Is Transforming IT

Elon Musk has made more than his fair share of news, mostly talking about Tesla and SpaceX. But I think we also need to pay attention to another aspect of Musk: his stated concerns about artificial intelligence.

Musk isn't alone in expressing anxiety about AI. The late Stephen Hawking also warned that AI could wreak havoc on a world that's not prepared to manage its potentially destructive power.

AI isn't just another “feature” that will be bolted onto existing systems. AI will quickly replace existing systems, rendering them obsolete and irrelevant. Once AI spreads through the IT universe, all of our roles as technology leaders will fundamentally change. In short, we will experience the disruption that we often see happening in other industries.

Imagine a world with no systems administrators, software developers, or business analysts. That scenario will become a reality sooner than we imagine. In the very near future, AI will be baked deeply into every conceivable system and platform.

How will IT leaders add value when most of IT becomes fully automated? That's a hard question, and we must begin considering it seriously. The most pressing issue is acquiring talent. You'll need a process for identifying, recruiting, hiring, and retaining people who understand AI and who know how to use it. You'll need to create appealing work environments to attract the best minds and keep them focused.

It's not too soon to begin setting up talent pipelines. Are you reaching out to local colleges and universities? Are you actively recruiting people with data science skills? If you aren't, you should be.

Mark van Rijmenam of the Netherlands wrote a good post recently on the difference between “good AI” and “bad AI.”4 In his post, he argues that when AI is applied thoughtfully and carefully, its benefits outweigh its potential for causing harm. But when AI is applied haphazardly or indiscriminately, it can morph into something genuinely dangerous.

“Good AI,” he wrote, must be “explainable.” In other words, it can't be a black box. The AI's decision-making processes must be visible and understandable to the human mind. In essence, we need to know how it works and how it's making decisions. When we don't require an AI's processes to be transparent and understandable, we're abdicating our responsibilities.
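To make that idea concrete, here is a minimal sketch of what "explainable" can look like in practice: a shallow decision tree whose learned rules can be printed and read by a person. The loan-approval scenario, feature names, and data below are hypothetical and purely illustrative, not an example from the book.

```python
# A minimal illustration of "explainable" AI: a shallow decision tree whose
# decision rules can be printed and reviewed by a human, rather than a black box.
# The loan-approval features and data are hypothetical, for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["income", "debt_ratio", "years_employed"]

# Tiny synthetic training set: each row is one applicant.
X = np.array([
    [65_000, 0.20, 8],
    [42_000, 0.55, 2],
    [88_000, 0.10, 12],
    [30_000, 0.60, 1],
    [54_000, 0.35, 5],
    [75_000, 0.25, 9],
])
y = np.array([1, 0, 1, 0, 1, 1])  # 1 = approved, 0 = declined

# A depth-limited tree keeps the decision process small enough to inspect.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the learned rules so a reviewer can see exactly how decisions are made.
print(export_text(model, feature_names=feature_names))
```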

AIs feed on data, so we also need to make sure that our data sources are clean and unbiased. We've already seen instances in which biased data has led AIs to make biased decisions, so this isn't science fiction. It's already happening.
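One simple way to start taking that responsibility seriously is to inspect the historical data before a model ever trains on it. The sketch below compares outcome rates across groups; the column names and numbers are hypothetical, and a real review would pair this kind of check with proper fairness tooling and domain expertise.

```python
# A rough sanity check on training data: compare historical outcome rates
# across groups. A large gap suggests the data itself may encode bias that
# a model trained on it will simply reproduce. Data here is hypothetical.
import pandas as pd

history = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per group, plus a simple disparity ratio between them.
rates = history.groupby("group")["approved"].mean()
print(rates)
print("Disparity ratio (min/max):", round(rates.min() / rates.max(), 2))
```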

In aviation, pilots are taught to stay ahead of their airplane's power curve. Allowing a plane to get behind the power curve is an invitation to disaster since it won't have enough power to recover if a problem arises.

In a sense, we're allowing ourselves to fall behind the AI power curve. After a certain point, it will be impossible to recover if something goes wrong. That's not where we want to be.


